Why Does Flow Director Cause Packet Reordering?
Intel Ethernet Flow Director is an advanced network interface card (NIC)
technology. It provides the benefits of parallel receive processing in
multiprocessing environments and can automatically steer incoming network data
to the same core on which its application process resides. However, our
analysis and experiments show that Flow Director cannot guarantee in-order
packet delivery in multiprocessing environments. Packet reordering has a
range of negative effects; for example, TCP performs poorly under severe
packet reordering. In this paper, we use a simplified model to analyze why
Flow Director can cause packet reordering, and our experiments verify the
analysis.
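To make the failure mode concrete, here is a minimal sketch (ours, not the paper's model) of how retargeting a flow between per-core receive queues can reorder it; the two-queue NIC and the migration point are assumptions for illustration:

from collections import deque

# A flow's packets are steered to per-core RX queues. If the application
# migrates from core 0 to core 1, Flow Director retargets the flow, but
# packets still queued on core 0 can be consumed after newer packets on
# core 1, so the stream arrives out of order.
queues = [deque(), deque()]              # one RX queue per core (hypothetical)
for seq in range(1, 6):                  # packets 1-5 arrive before migration
    queues[0].append(seq)
for seq in range(6, 11):                 # packets 6-10 arrive after migration
    queues[1].append(seq)

delivered = []
while any(queues):                       # round-robin draining stands in for
    for q in queues:                     # the two cores running in parallel
        if q:
            delivered.append(q.popleft())

print(delivered)                         # [1, 6, 2, 7, ...] -- reordered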
A Hybrid Wavelet-Based Method for the Peak Detection of Photoplethysmography Signals
Noninvasive peripheral oxygen saturation (SpO2) and pulse rate can be extracted from photoplethysmography (PPG) signals. However, the accuracy of the extraction depends directly on the quality of the acquired signal and on how well its peaks are identified; therefore, a hybrid wavelet-based method is proposed in this study. First, we suppressed partial motion artifacts and corrected the baseline drift with a wavelet method based on the principle of wavelet multiresolution. Then, we designed a quadratic spline wavelet modulus maximum algorithm to identify the PPG peaks automatically. To evaluate this hybrid method, a reflective pulse oximeter was used to acquire PPG signals from ten subjects in sitting, hand-raising, and gentle walking postures, and the peak recognition results on the raw and corrected signals were compared. The results showed that the hybrid method not only corrected the morphology of the signal but also improved peak identification, thereby increasing the measurement accuracy of SpO2 and pulse rate. As a result, our hybrid wavelet-based method substantially improves the evaluation of respiratory function and the analysis of heart rate variability.
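As a rough illustration of the pipeline (wavelet baseline correction, then peak picking), the sketch below uses a generic db4 decomposition and scipy's find_peaks as a stand-in for the paper's quadratic spline wavelet modulus maximum detector; the sampling rate, wavelet, and decomposition level are assumptions:

import numpy as np
import pywt
from scipy.signal import find_peaks

fs = 100                                      # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * t  # toy PPG beat plus baseline drift

# Multiresolution decomposition; zeroing the approximation coefficients
# discards the low-frequency band that carries the baseline drift.
coeffs = pywt.wavedec(ppg, 'db4', level=6)
coeffs[0] = np.zeros_like(coeffs[0])
corrected = pywt.waverec(coeffs, 'db4')[:len(ppg)]

# Stand-in peak detector: systolic peaks at least 0.4 s apart.
peaks, _ = find_peaks(corrected, distance=int(0.4 * fs))
print(f"{len(peaks)} peaks -> pulse rate ~ {len(peaks) * 6} bpm")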
Identifying the Causes of Ship Collision Accidents Using Text Mining and Bayesian Networks
Against the backdrop of robust global economic growth, the water transport industry is developing rapidly, leading to more ship collisions and a critical water traffic safety situation. This study uses text mining techniques to gather a corpus of accident data. The corpus covers human factors, ship factors, natural environmental factors, and management factors, which serve as target data for building a high-dimensional sparse feature vector space of eigenvalues and eigenvalue weights. Chi-square statistics are used to reduce dimensionality, yielding a final set of 33 text feature items that determine the causal factors of ship collision risk. Taking the four steps of the collision process as the primary focus, a Bayesian network structure for ship collision risk is constructed based on the “human-ship-environment-management” system. Using existing ship collision accident and near-miss reports, conditional probability tables are computed for each node of the Bayesian network, enabling the modelling of ship collision risk. The model is validated on an example, showing that, under the relevant conditions, the probability of collision exceeds 90%. This finding demonstrates the validity of the model and makes it possible to identify the primary causes of ship collision accidents.
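The chi-square reduction step can be pictured with a small sketch; the snippets, labels, and k below are invented stand-ins for the paper's corpus and its 33 selected features:

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# Made-up accident snippets labelled with the four factor categories.
reports = [
    "lookout failed to observe crossing vessel in dense fog",
    "engine failure caused loss of steering near the channel",
    "strong current and poor visibility during night transit",
    "bridge team did not follow the company passage plan",
]
labels = ["human", "ship", "environment", "management"]

X = CountVectorizer().fit_transform(reports)      # sparse term-count space
selector = SelectKBest(chi2, k=8).fit(X, labels)  # keep the top-k terms
X_reduced = selector.transform(X)                 # (k = 33 in the paper)
print(X_reduced.shape)

The terms that survive selection then become the candidate causal factors feeding the Bayesian network nodes.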
MasterRTL: A Pre-Synthesis PPA Estimation Framework for Any RTL Design
In modern VLSI design flow, the register-transfer level (RTL) stage is a
critical point, where designers define precise design behavior with hardware
description languages (HDLs) like Verilog. Since the RTL design is in the
format of HDL code, the standard way to evaluate its quality requires
time-consuming subsequent synthesis steps with EDA tools. This time-consuming
process significantly impedes design optimization at the early RTL stage.
Although some ML-based solutions have emerged recently, they fail to maintain
high accuracy across arbitrary RTL designs. In this work, we propose an innovative
pre-synthesis PPA estimation framework named MasterRTL. It first converts the
HDL code to a new bit-level design representation named the simple operator
graph (SOG). By only adopting single-bit simple operators, this SOG proves to
be a general representation that unifies different design types and styles. The
SOG is also more similar to the target gate-level netlist, reducing the gap
between RTL representation and netlist. In addition to the new SOG
representation, MasterRTL proposes new ML methods for the RTL-stage modeling of
timing, power, and area separately. Compared with state-of-the-art solutions,
experiments on a comprehensive dataset of 90 different designs show
correlation improvements of 0.33, 0.22, and 0.15 for total negative slack
(TNS), worst negative slack (WNS), and power, respectively.
Comment: To be published in the Proceedings of the 42nd IEEE/ACM International
Conference on Computer-Aided Design (ICCAD), 2023
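To convey what a bit-level simple operator graph looks like, here is a toy data structure (ours, not MasterRTL's implementation) that expands a multi-bit AND into single-bit nodes:

from dataclasses import dataclass, field

@dataclass
class Node:
    op: str                                # 'input', 'and', 'xor', ...
    fanin: list = field(default_factory=list)

def bitblast_and(graph, a_bits, b_bits):
    # Expand a & b over n-bit operands into n single-bit AND nodes,
    # returning the node id of each result bit.
    out = []
    for a, b in zip(a_bits, b_bits):
        graph.append(Node('and', [a, b]))
        out.append(len(graph) - 1)
    return out

graph = [Node('input') for _ in range(8)]  # two 4-bit inputs a[3:0], b[3:0]
result = bitblast_and(graph, [0, 1, 2, 3], [4, 5, 6, 7])
print(result, graph[result[0]])            # [8, 9, 10, 11] Node(op='and', ...)

Because every node is a single-bit primitive, such a graph stays close to the eventual gate-level netlist, which is the property the abstract highlights.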
The Performance Analysis of Linux Networking - Packet Receiving
The computing models for High-Energy Physics experiments are becoming ever more globally distributed and grid-based, both for technical reasons (e.g., to place computational and data resources near each other and the demand) and for strategic reasons (e.g., to leverage equipment investments). To support such computing models, the network and the end systems, computing and storage, face unprecedented challenges. One of the biggest challenges is to transfer scientific data sets--now in the multi-petabyte (10^15 bytes) range and expected to grow to exabytes within a decade--reliably and efficiently among facilities and computation centers scattered around the world. Both the network and the end systems must be able to support high-bandwidth, sustained, end-to-end data transmission. Recent technology trends show that although raw network transmission speeds are increasing rapidly, the advancement of microprocessor technology has slowed. As a result, network protocol-processing overheads have risen sharply relative to the time spent in packet transmission, degrading throughput for networked applications. More and more, it is the network end system, rather than the network itself, that is responsible for the degraded performance of network applications. In this paper, the Linux packet receive process is studied from NIC to application. We develop a mathematical model to characterize the Linux packet receiving process, and we analyze the key factors that affect the network performance of Linux systems.
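Whereas the paper derives an analytical model, the staged pipeline it studies (NIC ring buffer, softirq processing, socket buffer, application read) can be caricatured with a toy discrete-time simulation; every rate and capacity below is an invented illustration value, not a measurement:

# When the softirq service rate falls below the arrival rate, the ring
# buffer fills and the NIC drops packets -- the end system, not the
# network, becomes the bottleneck.
RING_SIZE, SOCK_SIZE = 256, 1024          # queue capacities (assumed)
arrive, softirq, app = 120, 100, 90       # packets handled per tick (assumed)

ring = sock = dropped = delivered = 0
for tick in range(1000):
    dropped += max(0, arrive - (RING_SIZE - ring))  # ring overflow -> drop
    ring = min(RING_SIZE, ring + arrive)
    moved = min(softirq, ring, SOCK_SIZE - sock)    # softirq drains the ring
    ring -= moved
    sock += moved
    read = min(app, sock)                           # application consumes
    sock -= read
    delivered += read

print(f"delivered={delivered} dropped={dropped}")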